Abstract
Introduction:
The rapidly evolving treatment landscape in multiple myeloma (MM) presents a challenge for continuing medical education (CME), requiring adaptive strategies that respond to new evidence and shifting clinical priorities. To address this, we explored the use of artificial intelligence (AI) to extract and synthesize clinical insights from CME learner data, with the goal of informing an education strategy aligned with the NCCN Clinical Practice Guidelines in Oncology (Version 4.2024) and key data such as those from the IMROZ trial (ASCO 2024).
Methods:
We analyzed CME learner response data across the four quarters of 2024. Our approach used ChatGPT to assist in clustering learner assessment questions into clinically meaningful themes, tracking performance over time, and revealing strategic education needs.
This process included four structured AI iterations:
Iteration 1: Initial prompts produced six themes — Induction Therapy, Relapsed/Refractory MM, Risk Stratification, Treatment Sequencing, Health Equity, and AI in Oncology.
Iteration 2: Prompts were refined to repackage insights into a leadership-ready report with timeline mapping, strategic recommendations, and cross-references to NCCN and ASCO guidance.
Iteration 3: Additional data from physician-authored insight decks and field commentary were integrated to refine clinical relevance.
Iteration 4: Expert scientific feedback was incorporated, and themes were consolidated into four final categories.
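As a purely illustrative sketch, the categorization step of this workflow can be approximated in code. Note the assumptions: the study itself used iterative ChatGPT prompting, not rule-based logic, and the keyword lists below are hypothetical stand-ins, not the study's actual criteria.

```python
# Illustrative only: a hypothetical keyword map approximating the four final
# categories. The actual study derived themes via iterative ChatGPT prompting
# with physician validation, not keyword matching.
THEME_KEYWORDS = {
    "Induction/quadruplet therapy": ["induction", "quadruplet", "isatuximab"],
    "Adverse event (AE) monitoring": ["adverse event", "toxicity", "crs"],
    "Bispecifics and CAR-T integration": ["bispecific", "car-t", "car t"],
    "Diagnostics & MRD utilization": ["mrd", "minimal residual disease"],
}

def classify_question(text: str) -> str:
    """Assign a CME assessment question to the first theme whose keywords it mentions."""
    lowered = text.lower()
    for theme, keywords in THEME_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return theme
    return "Unclassified"
```

In practice, an LLM-based approach handles paraphrase and clinical nuance that simple keyword rules miss, which is why the study layered prompt refinement and expert feedback on top of the initial clustering.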
Results:
A total of 110 questions with 17,783 responses were analyzed. Four final categories emerged consistently across quarters: (1) induction/quadruplet therapy, (2) adverse event (AE) monitoring, (3) bispecifics and CAR-T integration, and (4) diagnostics and MRD utilization. AI-assisted clustering and timeline mapping showed a quarter-over-quarter rise in learner comprehension of novel quadruplet induction regimens, supported by positive physician commentary referencing alignment with the IMROZ trial. In contrast, AE mitigation and sequencing decisions remained a persistent educational gap, with scores plateauing or declining across quarters despite repeated exposure. Cross-validation with healthcare professional (HCP) feedback confirmed needs for payer policy education, REMS logistics, and regionally adapted toolkits. AI-enabled analysis also uncovered learner demand for EMR-integrated pathways, dynamic CME tailoring, and personalization of content by role or practice setting.
Conclusions:
This AI-enhanced clinical insight generation process enabled scalable, structured extraction of clinical education trends from CME datasets in MM. The combination of data-driven categorization, physician validation, and timeline mapping revealed actionable patterns in learner performance and education gaps.
The multi-iteration approach, spanning automated clustering, prompt refinement, and physician validation, was essential to arriving at final, high-utility themes. Future directions include deploying adaptive learning modules, AI-powered insight dashboards, and EMR-linked decision supports.